# Sinhala pretraining
## Sinhalaberto
A relatively small model trained on the deduplicated OSCAR Sinhala dataset, providing foundational support for the low-resource Sinhala language.
- Tags: Large Language Model, Other
- Author: keshan
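As a usage sketch, a masked-language checkpoint like this can typically be queried through the `transformers` fill-mask pipeline. The Hub ID `keshan/SinhalaBERTo` and the RoBERTa-style `<mask>` token below are assumptions, not taken from this listing.

```python
# Minimal fill-mask sketch. The Hub ID "keshan/SinhalaBERTo" and the
# RoBERTa-style "<mask>" token are assumptions, not from this listing.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="keshan/SinhalaBERTo")

# Ask the masked language model to complete a Sinhala sentence;
# each candidate comes back with a score and the filled-in text.
for candidate in fill_mask("මම ගෙදර <mask>"):
    print(candidate["score"], candidate["sequence"])
```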
## SinBERT Small
SinBERT is a model pretrained on a large Sinhala monolingual corpus (sin-cc-15M) using the RoBERTa architecture, suitable for Sinhala text-processing tasks.
- License: MIT
- Tags: Large Language Model, Transformers, Other
- Author: NLPC-UOM
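Because SinBERT is a RoBERTa-style encoder, it can in principle be loaded with the `Auto*` classes and fine-tuned for downstream Sinhala tasks. The Hub ID `NLPC-UOM/SinBERT-small` below is an assumption, as is the two-label classification head added for illustration.

```python
# Fine-tuning setup sketch. The Hub ID "NLPC-UOM/SinBERT-small" is an
# assumption; the two-label head is illustrative, not part of the model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "NLPC-UOM/SinBERT-small"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Tokenize one Sinhala sentence and run a forward pass to get
# (as-yet-untrained) classification logits over the two labels.
inputs = tokenizer("මෙය සිංහල වාක්‍යයකි.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```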